1 Introduction

Optimization is important in many fields, including data science. In manufacturing, where every decision affects the process and the organization's profit, optimization is employed everywhere: deciding how many units of each product to produce, scheduling units for production, finding the best or optimal process parameters, and determining routes, as in the traveling salesman problem. In data science, we are familiar with model tuning, where we tune a model's hyper-parameters to improve its performance. An optimization algorithm can help us get a better model.

Bayesian Optimization is one of many optimization algorithms that can be applied to a wide range of problems. It employs a probabilistic model to optimize the fitness function. Its main advantage shows when evaluations of the fitness function are expensive to perform, as is the case when each evaluation requires training a machine learning model: it is then easy to justify some extra computation to make better decisions1. It is best suited for optimization over continuous domains of fewer than 20 dimensions, and it tolerates stochastic noise in function evaluations2.

This post is dedicated to learning how Bayesian Optimization works and to its application in various business and data science cases. The algorithms will be run in R.

1.1 About

1.2 Learning Objectives

  • Learn how Bayesian Optimization works
  • Learn how to apply Bayesian Optimization in business and data science problem
  • Compare Bayesian Optimization with Particle Swarm Optimization

2 Bayesian Optimization: Concept

The general procedure when working with Bayesian Optimization is as follows:

Bayesian Optimization consists of two main components: a Bayesian statistical model for modeling the objective function, and an acquisition function for deciding where to sample next. The Gaussian process is often employed for the statistical model due to its flexibility and tractability.

2.1 Gaussian Process

The model used to approximate the objective function is called the surrogate model, and the Gaussian process is one of them. Whenever we have an unknown quantity in Bayesian statistics, we suppose that it was drawn at random by nature from some prior probability distribution. A Gaussian process takes this prior distribution to be multivariate normal, with a specific mean vector and covariance matrix.

The prior distribution on \([f(x_1), f(x_2), ..., f(x_k)]\) is:

\[f(x_{1:k}) \sim \mathcal{N} (\mu_0(x_{1:k}),\ \Sigma_0(x_{1:k}, x_{1:k})) \]

\(\mathcal{N}(\mu, \Sigma)\) : Gaussian/normal distribution with mean vector \(\mu\) and covariance matrix \(\Sigma\)

\(\mu_0(x_{1:k})\) : Mean function evaluated at each \(x_i\). It is common to use \(\mu_0(x)=0\), as the Gaussian process is flexible enough to model the mean arbitrarily well3

\(\Sigma_0(x_{1:k},x_{1:k})\) : Kernel/covariance function evaluated at each pair of \(x_i\)

A Gaussian process also provides a Bayesian posterior probability distribution that describes potential values of \(f(x)\) at a candidate point \(x\). Each time we observe \(f\) at a new point, this posterior distribution is updated. The Gaussian process prior distribution is converted into a posterior distribution after we have observed some values of \(f\) (or \(y\)).

\[f(x)|f(x_{1:n}) \sim \mathcal{N} (\mu_n(x), \ \sigma_n^2(x))\]

Where:

\[\mu_n(x) = \Sigma_0(x,x_{1:n}) * \Sigma_0(x_{1:n},x_{1:n})^{-1} * (f(x_{1:n})-\mu_0(x_{1:n})) + \mu_0(x)\]

\[\sigma_n^2(x) = \Sigma_0(x,x) - \Sigma_0(x,x_{1:n}) * \Sigma_0(x_{1:n},x_{1:n})^{-1} * \Sigma_0(x_{1:n},x)\]

Below is an example graph of a Gaussian process posterior over functions. The blue dots represent the fitness function values at 3 sample points. The solid red line represents the estimate of the fitness function, while the dashed lines represent the Bayesian credible intervals (similar to confidence intervals).

Let’s illustrate the process with GPfit package. Suppose I have a function below:

Create noise-free \(f\) for \(n_0\) based on 5 points within range of [0,1].

##              x         y
## [1,] 0.0000000  75.68025
## [2,] 0.3333333  32.59273
## [3,] 0.5000000 -43.46241
## [4,] 0.6666667 -74.99929
## [5,] 1.0000000  17.33797

Create a Gaussian process with GP_fit(), using the power-exponential correlation function. You can also use the Matérn correlation function via list(type = "matern", nu = 5/2)4.
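The fitting code is not shown in this chunk; a minimal sketch with GPfit might look like the following, taking the 5 sample points from the printout above (1.95 is the package's default power for the exponential correlation):

```r
library(GPfit)

# 5 noise-free sample points: x values from the printout above;
# y comes from the post's fitness function, not reproduced here
x <- c(0, 1/3, 0.5, 2/3, 1)
y <- c(75.68025, 32.59273, -43.46241, -74.99929, 17.33797)

# Fit a GP with the power-exponential correlation function
gp <- GP_fit(x, y, corr = list(type = "exponential", power = 1.95))

# Posterior mean and uncertainty over a grid of candidate x values
x_grid <- matrix(seq(0, 1, length.out = 100), ncol = 1)
pred   <- predict(gp, xnew = x_grid)
mu     <- pred$Y_hat        # expected value at each x
sigma  <- sqrt(pred$MSE)    # posterior standard deviation at each x
```

GP_fit() expects inputs scaled to [0,1], which the sample points already satisfy.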

After fitting the GP model, we can calculate the expected value \(\mu(x)\) at each possible value of \(x\) and the corresponding uncertainty \(\sigma(x)\). These will be used when computing the acquisition function over the possible values of \(x\).

We can visualize the result.

2.2 Acquisition Function

The acquisition function is employed to choose the point \(x\) where we will sample next: the chosen point is the one with the optimum value of the acquisition function. The acquisition function estimates the value that would be generated by evaluating the fitness function at a new point \(x\), based on the current posterior distribution over \(f\).

Below is an illustration of the acquisition function curve. The values are calculated using the expected improvement method. The point with the highest acquisition function value will be sampled in the next round/iteration.

There are several choices of acquisition function, such as expected improvement, the Gaussian process upper confidence bound, and entropy search. Here we will illustrate the expected improvement function.

\[EI(x) = \left\{ \begin{array}{ll} (\mu(x) - f(x^+) - \xi) \Phi(Z) + \sigma(x) \phi(Z) & \text{if } \sigma(x) > 0 \\ 0 & \text{if } \sigma(x) = 0 \\ \end{array} \right. \]

Where

\[Z = \frac{\mu(x) - f(x^+) - \xi}{\sigma(x)}\]

\(f(x^+)\) : Best observed value of \(f(x)\) in the sample

\(\mu(x)\) : Mean of the GP posterior predictive at \(x\)

\(\sigma(x)\) : Standard deviation of the GP posterior predictive at \(x\)

\(\xi\) : xi (some authors write epsilon instead). Determines the amount of exploration during optimization; higher \(\xi\) values lead to more exploration. A common default value for \(\xi\) is 0.01.

\(\Phi\) : The cumulative distribution function (CDF) of the standard normal distribution

\(\phi\) : The probability density function (PDF) of the standard normal distribution

Suppose that y_best is the best fitness value observed in the sample.

We can use the code below to get the expected improvement value for each \(x\). We will use an epsilon value of 0.01.
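The computation is straightforward in base R. A sketch, with mu and sigma as the GP posterior mean and standard deviation at the candidate points, and y_best as defined above:

```r
# Expected improvement for maximization.
# mu, sigma: GP posterior mean / standard deviation at candidate points
# y_best: best observed fitness value; eps: exploration parameter (xi)
expected_improvement <- function(mu, sigma, y_best, eps = 0.01) {
  z  <- (mu - y_best - eps) / sigma
  ei <- (mu - y_best - eps) * pnorm(z) + sigma * dnorm(z)
  ifelse(sigma > 0, ei, 0)  # EI is defined as 0 wherever sigma = 0
}
```

The next point to sample is the \(x\) with the highest expected_improvement value.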

Let’s visualize the result. Create a data.frame for the result and create exp_best, which holds the \(x\) with the highest expected improvement value.

We can visualize the result

With these basic steps covered, I hope we are ready to apply Bayesian Optimization.

3 Bayesian Optimization in R

We can do Bayesian Optimization in R using the rBayesianOptimization package.

3.1 Business Application

3.1.1 Finance: Portfolio Optimization

The problem is replicated from Zhu et al.(2011)5. The study employed a PSO algorithm for portfolio selection and optimization in investment management.

The portfolio optimization problem is concerned with managing a portfolio of assets to minimize risk subject to a constraint guaranteeing a given level of return. One of the fundamental principles of financial investment is diversification, where investors spread their investments over different types of assets. Portfolio diversification minimizes investors' exposure to risk and maximizes the return on the portfolio.

The fitness function is the adjusted Sharpe Ratio for a restricted portfolio. It combines information from the mean and variance of an asset and functions as a risk-adjusted measure of mean return, which is often used to evaluate the performance of a portfolio.

The Sharpe ratio can help to explain whether a portfolio’s excess returns are due to smart investment decisions or a result of too much risk. Although one portfolio or fund can enjoy higher returns than its peers, it is only a good investment if those higher returns do not come with an excess of additional risk.

The greater a portfolio’s Sharpe ratio, the better its risk-adjusted performance. If the analysis results in a negative Sharpe ratio, it either means the risk-free rate is greater than the portfolio’s return, or the portfolio’s return is expected to be negative.

The fitness function is shown below:

\[Max \ f(x) = \frac{\sum_{i=1}^{N} W_i*r_i - R_f}{\sum_{i=1}^{N}\sum_{j=1}^{N} W_i * W_j * \sigma_{ij}}\]

Subject To

\[\sum_{i=1}^{N} W_i = 1\] \[0 \leq W_i \leq 1\] \[i = 1, 2, ..., N\]

\(N\): Number of different assets

\(W_i\): Weight of each stock in the portfolio

\(r_i\): Return of stock i

\(R_f\): The rate of return of a risk-free security (e.g. the interest rate on a three-month U.S. Treasury bill)

\(\sigma_{ij}\): Covariance between the returns of assets i and j

By adjusting the portfolio weights \(W_i\), we can maximize the portfolio's Sharpe Ratio, in effect balancing the trade-off between maximizing the expected return and minimizing the risk.

3.1.1.1 Import Data

Data is acquired from New York Stock Exchange on Kaggle (https://www.kaggle.com/dgawlik/nyse). We will only use data from January to March of 2015 for illustration.

To get a clearer company name, let's import the Ticker Symbol and Security columns.

Let’s say I have assets in 3 different stocks. I will randomly choose the stocks.

3.1.1.2 Calculate Returns

Let’s calculate the daily returns.

Let’s calculate the mean return of each stock.

The value of \(R_f\) is acquired from the interest rate on a three-month U.S. Treasury bill. Since our data covers early 2015, we will use the rate from March 27, 2015, which is 0.04%. The rate is acquired from https://ycharts.com/indicators/3_month_t_bill.

3.1.1.3 Covariance Matrix Between Portfolio Assets

Calculate the covariance matrix between the portfolio assets. First, we need to separate the returns of each stock into separate columns by spreading them.

Create the covariance matrix.
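The wrangling code is omitted from this chunk, so here is a self-contained sketch of the idea with toy data (illustrative values and column names, not the real NYSE returns). The post spreads the long-format returns into one column per stock (with tidyr's spread(); base reshape() shown here does the same job) and then calls cov():

```r
# Toy long-format daily returns (hypothetical values)
ret_long <- data.frame(
  date   = rep(1:4, times = 3),
  symbol = rep(c("A", "B", "C"), each = 4),
  ret    = c( 0.010, -0.020,  0.005,  0.000,
              0.030,  0.000, -0.010,  0.020,
             -0.005,  0.010,  0.020, -0.010)
)

# Spread into wide format: one return column per stock
ret_wide <- reshape(ret_long, idvar = "date", timevar = "symbol",
                    direction = "wide")

# Covariance matrix between the stocks' returns
cov_mat <- cov(ret_wide[, -1])
```

This matrix supplies the \(\sigma_{ij}\) terms in the fitness function's denominator.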

3.1.1.5 Define Parameters

Let’s define the search boundary

Let’s set the initial sample
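The setup code is not shown in this chunk; a sketch of what these definitions likely look like (the variable names w1, w2, w3 follow the optimization log below, and the 20-point grid size is an assumption based on the printed rounds):

```r
# Search boundary: each portfolio weight lies in [0, 1]
search_bound <- list(w1 = c(0, 1),
                     w2 = c(0, 1),
                     w3 = c(0, 1))

# Initial sample: 20 random points inside the boundary
set.seed(123)
search_grid <- data.frame(w1 = runif(20),
                          w2 = runif(20),
                          w3 = runif(20))
```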

3.1.1.6 Run the Algorithm

Use BayesianOptimization() function to employ the algorithm. The parameters include:

  • FUN : the fitness function
  • bounds : a list of lower and upper bound of each dimension/variables
  • init_grid_dt : User specified points to sample the target function
  • init_points : Number of randomly chosen points to sample the target function before Bayesian Optimization fits the Gaussian process
  • n_iter : number of repeated Bayesian Optimization
  • acq : Choice of acquisition function
  • kappa : tunable parameter kappa of the GP Upper Confidence Bound, balancing exploitation against exploration; increasing kappa makes the search favor exploration
  • eps : tunable parameter epsilon of Expected Improvement and Probability of Improvement, balancing exploitation against exploration; increasing epsilon makes the sampled points more spread out across the whole range
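The call itself is not shown above; a sketch with rBayesianOptimization follows. Note that FUN must return a list with Score and Pred elements for the package to work. The names mean_return, rf, cov_mat, search_bound, and search_grid are assumed to come from the earlier steps, and the quadratic penalty for violating the sum-to-one constraint is an inference from the very large negative values in the log below:

```r
library(rBayesianOptimization)

# Sharpe-ratio fitness; weights whose sum is far from 1 are heavily
# penalized (inferred penalty form, not confirmed by the post)
fitness <- function(w1, w2, w3) {
  w <- c(w1, w2, w3)
  sharpe  <- (sum(w * mean_return) - rf) / (t(w) %*% cov_mat %*% w)
  penalty <- 1e9 * (sum(w) - 1)^2
  list(Score = as.numeric(sharpe) - penalty, Pred = 0)
}

bayes_portfolio <- BayesianOptimization(
  FUN          = fitness,
  bounds       = search_bound,   # list of c(lower, upper) per weight
  init_grid_dt = search_grid,    # the 20 pre-sampled points
  init_points  = 0,
  n_iter       = 10,             # Bayesian Optimization iterations
  acq          = "ei",           # expected improvement
  eps          = 0.01
)
```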
## elapsed = 0.04   Round = 1   w1 = 0.2876 w2 = 0.8895 w3 = 0.1428 Value = -102346787.3517 
## elapsed = 0.00   Round = 2   w1 = 0.7883 w2 = 0.6928 w3 = 0.4145 Value = -802197653.5235 
## elapsed = 0.00   Round = 3   w1 = 0.4090 w2 = 0.6405 w3 = 0.4137 Value = -214561698.7466 
## elapsed = 0.00   Round = 4   w1 = 0.8830 w2 = 0.9943 w3 = 0.3688 Value = -1552846530.5179 
## elapsed = 0.00   Round = 5   w1 = 0.9405 w2 = 0.6557 w3 = 0.1524 Value = -560428652.5203 
## elapsed = 0.00   Round = 6   w1 = 0.0456 w2 = 0.7085 w3 = 0.1388 Value = -11471892.0673 
## elapsed = 0.02   Round = 7   w1 = 0.5281 w2 = 0.5441 w3 = 0.2330 Value = -93150457.4806 
## elapsed = 0.00   Round = 8   w1 = 0.8924 w2 = 0.5941 w3 = 0.4660 Value = -907301041.4723 
## elapsed = 0.00   Round = 9   w1 = 0.5514 w2 = 0.2892 w3 = 0.2660 Value = -11356600.7449 
## elapsed = 0.00   Round = 10  w1 = 0.4566 w2 = 0.1471 w3 = 0.8578 Value = -213034020.6426 
## elapsed = 0.00   Round = 11  w1 = 0.9568 w2 = 0.9630 w3 = 0.0458 Value = -932554747.0758 
## elapsed = 0.00   Round = 12  w1 = 0.4533 w2 = 0.9023 w3 = 0.4422 Value = -636537927.4026 
## elapsed = 0.00   Round = 13  w1 = 0.6776 w2 = 0.6907 w3 = 0.7989 Value = -1362357606.0057 
## elapsed = 0.00   Round = 14  w1 = 0.5726 w2 = 0.7955 w3 = 0.1219 Value = -240100071.0720 
## elapsed = 0.00   Round = 15  w1 = 0.1029 w2 = 0.0246 w3 = 0.5609 Value = -97040726.7633 
## elapsed = 0.00   Round = 16  w1 = 0.8998 w2 = 0.4778 w3 = 0.2065 Value = -341233940.9660 
## elapsed = 0.00   Round = 17  w1 = 0.2461 w2 = 0.7585 w3 = 0.1275 Value = -17444831.2329 
## elapsed = 0.00   Round = 18  w1 = 0.0421 w2 = 0.2164 w3 = 0.7533 Value = -138639.9912 
## elapsed = 0.01   Round = 19  w1 = 0.3279 w2 = 0.3182 w3 = 0.8950 Value = -292840156.2228 
## elapsed = 0.00   Round = 20  w1 = 0.9545 w2 = 0.2316 w3 = 0.3745 Value = -314263621.5192 
## elapsed = 0.00   Round = 21  w1 = 0.9749 w2 = 0.0000 w3 = 0.0000 Value = -627666.4709 
## elapsed = 0.00   Round = 22  w1 = 0.0000 w2 = 0.0000 w3 = 1.0000 Value = 16.2381 
## elapsed = 0.00   Round = 23  w1 = 0.7114 w2 = 0.0000 w3 = 0.2841 Value = -20886.1637 
## elapsed = 0.00   Round = 24  w1 = 0.0000 w2 = 0.9980 w3 = 0.0000 Value = -4032.3122 
## elapsed = 0.00   Round = 25  w1 = 0.7790 w2 = 0.1572 w3 = 0.0581 Value = -33160.3833 
## elapsed = 0.00   Round = 26  w1 = 0.0076 w2 = 0.4095 w3 = 0.5954 Value = -157505.4064 
## elapsed = 0.00   Round = 27  w1 = 0.0000 w2 = 0.5964 w3 = 0.4066 Value = -9006.5778 
## elapsed = 0.00   Round = 28  w1 = 0.0020 w2 = 0.8404 w3 = 0.1630 Value = -28718.3692 
## elapsed = 0.00   Round = 29  w1 = 0.1507 w2 = 0.0048 w3 = 0.8444 Value = -21.0078 
## elapsed = 0.00   Round = 30  w1 = 0.0781 w2 = 0.0000 w3 = 0.9129 Value = -80038.9236 
## 
##  Best Parameters Found: 
## Round = 22   w1 = 0.0000 w2 = 0.0000 w3 = 1.0000 Value = 16.2381
## 98.28 sec elapsed

The result of the function is a list with 4 components:

  • Best_Par : a named vector of the best hyperparameter set found
  • Best_Value : the value of metrics achieved by the best hyperparameter set
  • History : table of bayesian optimization history
  • Pred : table with validation/cross-validation prediction for each round of bayesian optimization history

So, what is the optimum Sharpe Ratio from Bayesian optimization?

## [1] 16.23812


Let’s check the total weight of the optimum result.

## [1] 1

Based on Bayesian Optimization, here is how your asset should be distributed.

3.1.1.7 Change the Acquisition Function

Let’s try another Bayesian Optimization run for the problem. We will change the acquisition function from expected improvement to the Gaussian process upper confidence bound.

## elapsed = 0.00   Round = 1   w1 = 0.2876 w2 = 0.8895 w3 = 0.1428 Value = -102346787.3517 
## elapsed = 0.00   Round = 2   w1 = 0.7883 w2 = 0.6928 w3 = 0.4145 Value = -802197653.5235 
## elapsed = 0.00   Round = 3   w1 = 0.4090 w2 = 0.6405 w3 = 0.4137 Value = -214561698.7466 
## elapsed = 0.00   Round = 4   w1 = 0.8830 w2 = 0.9943 w3 = 0.3688 Value = -1552846530.5179 
## elapsed = 0.00   Round = 5   w1 = 0.9405 w2 = 0.6557 w3 = 0.1524 Value = -560428652.5203 
## elapsed = 0.00   Round = 6   w1 = 0.0456 w2 = 0.7085 w3 = 0.1388 Value = -11471892.0673 
## elapsed = 0.00   Round = 7   w1 = 0.5281 w2 = 0.5441 w3 = 0.2330 Value = -93150457.4806 
## elapsed = 0.00   Round = 8   w1 = 0.8924 w2 = 0.5941 w3 = 0.4660 Value = -907301041.4723 
## elapsed = 0.00   Round = 9   w1 = 0.5514 w2 = 0.2892 w3 = 0.2660 Value = -11356600.7449 
## elapsed = 0.00   Round = 10  w1 = 0.4566 w2 = 0.1471 w3 = 0.8578 Value = -213034020.6426 
## elapsed = 0.00   Round = 11  w1 = 0.9568 w2 = 0.9630 w3 = 0.0458 Value = -932554747.0758 
## elapsed = 0.00   Round = 12  w1 = 0.4533 w2 = 0.9023 w3 = 0.4422 Value = -636537927.4026 
## elapsed = 0.00   Round = 13  w1 = 0.6776 w2 = 0.6907 w3 = 0.7989 Value = -1362357606.0057 
## elapsed = 0.00   Round = 14  w1 = 0.5726 w2 = 0.7955 w3 = 0.1219 Value = -240100071.0720 
## elapsed = 0.00   Round = 15  w1 = 0.1029 w2 = 0.0246 w3 = 0.5609 Value = -97040726.7633 
## elapsed = 0.00   Round = 16  w1 = 0.8998 w2 = 0.4778 w3 = 0.2065 Value = -341233940.9660 
## elapsed = 0.00   Round = 17  w1 = 0.2461 w2 = 0.7585 w3 = 0.1275 Value = -17444831.2329 
## elapsed = 0.00   Round = 18  w1 = 0.0421 w2 = 0.2164 w3 = 0.7533 Value = -138639.9912 
## elapsed = 0.00   Round = 19  w1 = 0.3279 w2 = 0.3182 w3 = 0.8950 Value = -292840156.2228 
## elapsed = 0.00   Round = 20  w1 = 0.9545 w2 = 0.2316 w3 = 0.3745 Value = -314263621.5192 
## elapsed = 0.00   Round = 21  w1 = 1.0000 w2 = 0.0000 w3 = 0.0000 Value = 3.7129 
## elapsed = 0.00   Round = 22  w1 = 0.0000 w2 = 0.0000 w3 = 1.0000 Value = 16.2381 
## elapsed = 0.00   Round = 23  w1 = 0.0000 w2 = 0.9994 w3 = 0.0000 Value = -374.2826 
## elapsed = 0.00   Round = 24  w1 = 0.6871 w2 = 0.0000 w3 = 0.3076 Value = -28791.8047 
## elapsed = 0.00   Round = 25  w1 = 0.6018 w2 = 0.3872 w3 = 0.0000 Value = -121085.4959 
## elapsed = 0.00   Round = 26  w1 = 0.0022 w2 = 0.7501 w3 = 0.2644 Value = -278397.5709 
## elapsed = 0.01   Round = 27  w1 = 0.2083 w2 = 0.0074 w3 = 0.7750 Value = -87021.5536 
## elapsed = 0.00   Round = 28  w1 = 0.6844 w2 = 0.1581 w3 = 0.1652 Value = -58926.3506 
## elapsed = 0.00   Round = 29  w1 = 0.7787 w2 = 0.2222 w3 = 0.0000 Value = -685.6368 
## elapsed = 0.00   Round = 30  w1 = 0.8910 w2 = 0.0540 w3 = 0.0551 Value = -0.4746 
## 
##  Best Parameters Found: 
## Round = 22   w1 = 0.0000 w2 = 0.0000 w3 = 1.0000 Value = 16.2381
## 1: 618.94 sec elapsed

3.1.1.8 Compare With Particle Swarm Optimization

Let’s compare the optimum Sharpe Ratio from Bayesian Optimization with another algorithm: Particle Swarm Optimization. If you are unfamiliar with the method, you can visit my post6.

Let’s redefine the fitness function to suit the PSO from pso package.

Let’s run the PSO algorithm. PSO will run for 10,000 iterations with a swarm size of 100. If there is no improvement in the fitness value for 500 iterations, the algorithm will stop.
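The PSO call is not shown; a sketch using pso::psoptim, assuming a helper fitness_pso(w) that returns the negative (penalized) Sharpe ratio of a weight vector w, since psoptim minimizes:

```r
library(pso)
library(tictoc)

tic()
set.seed(123)
pso_portfolio <- psoptim(
  par   = rep(NA, 3),       # NA start: random initial positions
  fn    = fitness_pso,      # negative Sharpe ratio (assumed helper)
  lower = rep(0, 3),
  upper = rep(1, 3),
  control = list(maxit          = 10000,  # maximum iterations
                 s              = 100,    # swarm size
                 maxit.stagnate = 500)    # stop after 500 idle iterations
)
toc()
pso_portfolio
```

The convergence code 4 in the output below corresponds to the maxit.stagnate stopping rule.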

## $par
## [1] 0.18286098 0.01961205 0.79752697
## 
## $value
## [1] -19.2006
## 
## $counts
##  function iteration  restarts 
##    107700      1077         0 
## 
## $convergence
## [1] 4
## 
## $message
## [1] "Maximal number of iterations without improvement reached"
## 16.69 sec elapsed

The solution has a Sharpe Ratio of 19.201 (the sign is flipped because psoptim minimizes).

Let’s check the total weight

## [1] 1

Based on PSO, here is how your asset should be distributed.

For this problem, PSO works better than Bayesian Optimization, as indicated by the optimum fitness value. However, Bayesian Optimization only ran 30 function evaluations (20 from the initial sample, 10 from iterations), while PSO ran more than 100,000 evaluations. The trade-off is that Bayesian Optimization's runtime was longer than PSO's, since here each function evaluation is cheap. We will try a more complex problem via deep learning to see whether the trade-off changes.

3.2 Machine Learning Application

We will try to classify whether a user will give a game an above-average score based on the content of their review, using a neural network model. Review features will be extracted using a text mining approach. For this problem, we will optimize the hyper-parameters of the neural network. This problem is based on my previous post7.

3.2.1 Import Data

The dataset consists of user reviews of the 100 best PC games from the Metacritic website. I already scraped the data, which you can download here.

Since we will use keras to build the neural network architecture, we will set the environment first.

3.2.2 Data Preprocessing

We want to clean the text by removing URLs and word elongations. We will also replace “?” with “questionmark” and “!” with “exclamationmark” to see whether these characters can be useful in our analysis.

Since we want to classify the score into above average or below average, we need to add the label into the data.

## Joining, by = "game"

Finally, we will make a document-term matrix, with each row indicating a review and the columns consisting of the top 1024 words in the entire review corpus. We will use this matrix to classify whether the user gives an above-average score based on the appearance of one or more terms.

## Joining, by = "word"
## Selecting by n
## Joining, by = "word"

3.2.3 Cross-Validation

We will split the data into a training set, validation set, and testing set. First, we split the data into training and testing sets.
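A minimal sketch of the first split in base R (the 80/20 proportion and object names are assumptions; the post may use a dedicated resampling package instead):

```r
set.seed(123)

# Toy stand-in for the number of reviews (hypothetical size)
n_obs <- 1000

# 80/20 train/test split on row indices
train_idx <- sample(n_obs, size = floor(0.8 * n_obs))
test_idx  <- setdiff(seq_len(n_obs), train_idx)
```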

3.2.5 Define Fitness Function

We will build the neural network architecture. The model has several layers: dense layers that transform the data using the relu activation function (the first and second dense layers), and dropout layers to prevent the model from overfitting. Finally, we scale the output into the range [0,1] with the sigmoid function, as the probability that an observation belongs to a particular class. The number of epochs represents how many times the model performs feed-forward and back-propagation.

We will optimize the dropout rates of the 1st and 2nd dropout layers, as well as the learning rate.
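The architecture code is not shown in this chunk; a sketch of the fitness function with keras follows. The objects x_train, y_train, x_val, and y_val are assumed to come from the split above; the layer sizes, epoch count, and batch size are placeholders, and the Score is the validation accuracy wrapped in the list format that rBayesianOptimization expects:

```r
library(keras)

fitness_nn <- function(dropout_1, dropout_2, learning_rate) {
  model <- keras_model_sequential() %>%
    layer_dense(units = 64, activation = "relu",
                input_shape = ncol(x_train)) %>%   # 1st dense layer
    layer_dropout(rate = dropout_1) %>%            # 1st dropout (tuned)
    layer_dense(units = 32, activation = "relu") %>%
    layer_dropout(rate = dropout_2) %>%            # 2nd dropout (tuned)
    layer_dense(units = 1, activation = "sigmoid") # class probability

  model %>% compile(
    optimizer = optimizer_adam(lr = learning_rate), # tuned learning rate
    loss      = "binary_crossentropy",
    metrics   = "accuracy"
  )

  model %>% fit(x_train, y_train,
                epochs = 10, batch_size = 32, verbose = 0)

  score <- model %>% evaluate(x_val, y_val, verbose = 0)
  list(Score = score[["accuracy"]], Pred = 0)
}
```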

3.2.6 Define Parameters

Define the search boundary

Define initial search sample

3.2.7 Run the Algorithm

We will run the Bayesian Optimization with 20 iterations.

## elapsed = 10.31  Round = 1   dropout_1 = 0.2013  dropout_2 = 0.6227  learning_rate = 0.1428  Value = 0.3142 
## elapsed = 9.05   Round = 2   dropout_1 = 0.5518  dropout_2 = 0.4850  learning_rate = 0.4145  Value = 0.6858 
## elapsed = 12.04  Round = 3   dropout_1 = 0.2863  dropout_2 = 0.4484  learning_rate = 0.4137  Value = 0.3142 
## elapsed = 9.07   Round = 4   dropout_1 = 0.6181  dropout_2 = 0.6960  learning_rate = 0.3688  Value = 0.3142 
## elapsed = 9.55   Round = 5   dropout_1 = 0.6583  dropout_2 = 0.4590  learning_rate = 0.1524  Value = 0.6858 
## elapsed = 9.22   Round = 6   dropout_1 = 0.0319  dropout_2 = 0.4960  learning_rate = 0.1388  Value = 0.6858 
## elapsed = 9.20   Round = 7   dropout_1 = 0.3697  dropout_2 = 0.3808  learning_rate = 0.2330  Value = 0.6858 
## elapsed = 9.28   Round = 8   dropout_1 = 0.6247  dropout_2 = 0.4159  learning_rate = 0.4660  Value = 0.3142 
## elapsed = 9.25   Round = 9   dropout_1 = 0.3860  dropout_2 = 0.2024  learning_rate = 0.2660  Value = 0.3142 
## elapsed = 9.32   Round = 10  dropout_1 = 0.3196  dropout_2 = 0.1030  learning_rate = 0.8578  Value = 0.6858 
## elapsed = 9.18   Round = 11  dropout_1 = 0.6698  dropout_2 = 0.6741  learning_rate = 0.0458  Value = 0.6106 
## elapsed = 9.32   Round = 12  dropout_1 = 0.3173  dropout_2 = 0.6316  learning_rate = 0.4422  Value = 0.3142 
## elapsed = 9.17   Round = 13  dropout_1 = 0.4743  dropout_2 = 0.4835  learning_rate = 0.7989  Value = 0.3142 
## elapsed = 9.32   Round = 14  dropout_1 = 0.4008  dropout_2 = 0.5568  learning_rate = 0.1219  Value = 0.6858 
## elapsed = 9.22   Round = 15  dropout_1 = 0.0720  dropout_2 = 0.0172  learning_rate = 0.5609  Value = 0.3150 
## elapsed = 9.36   Round = 16  dropout_1 = 0.6299  dropout_2 = 0.3345  learning_rate = 0.2065  Value = 0.6858 
## elapsed = 10.56  Round = 17  dropout_1 = 0.1723  dropout_2 = 0.5309  learning_rate = 0.1275  Value = 0.3142 
## elapsed = 10.20  Round = 18  dropout_1 = 0.0294  dropout_2 = 0.1515  learning_rate = 0.7533  Value = 0.6858 
## elapsed = 9.09   Round = 19  dropout_1 = 0.2295  dropout_2 = 0.2227  learning_rate = 0.8950  Value = 0.3142 
## elapsed = 9.39   Round = 20  dropout_1 = 0.6682  dropout_2 = 0.1621  learning_rate = 0.3745  Value = 0.6858 
## elapsed = 9.23   Round = 21  dropout_1 = 0.3514  dropout_2 = 0.5162  learning_rate = 0.1438  Value = 0.3142 
## elapsed = 9.29   Round = 22  dropout_1 = 0.5720  dropout_2 = 0.6674  learning_rate = 0.6529  Value = 0.3142 
## elapsed = 9.33   Round = 23  dropout_1 = 0.2256  dropout_2 = 0.4531  learning_rate = 0.6687  Value = 0.3142 
## elapsed = 9.36   Round = 24  dropout_1 = 0.4050  dropout_2 = 0.2502  learning_rate = 0.4763  Value = 0.6858 
## elapsed = 9.36   Round = 25  dropout_1 = 0.4652  dropout_2 = 0.1840  learning_rate = 0.5679  Value = 0.3142 
## elapsed = 9.19   Round = 26  dropout_1 = 0.0526  dropout_2 = 0.1706  learning_rate = 0.0004  Value = 0.5314 
## elapsed = 9.16   Round = 27  dropout_1 = 0.4431  dropout_2 = 0.6912  learning_rate = 0.6421  Value = 0.3142 
## elapsed = 9.41   Round = 28  dropout_1 = 0.2738  dropout_2 = 0.2073  learning_rate = 0.5655  Value = 0.3142 
## elapsed = 9.80   Round = 29  dropout_1 = 0.5098  dropout_2 = 0.0046  learning_rate = 0.9460  Value = 0.6858 
## elapsed = 9.51   Round = 30  dropout_1 = 0.1216  dropout_2 = 0.5716  learning_rate = 0.2348  Value = 0.3142 
## elapsed = 9.50   Round = 31  dropout_1 = 0.6767  dropout_2 = 0.4935  learning_rate = 0.1168  Value = 0.3142 
## elapsed = 10.03  Round = 32  dropout_1 = 0.2144  dropout_2 = 0.0273  learning_rate = 0.0392  Value = 0.5663 
## elapsed = 9.48   Round = 33  dropout_1 = 0.1466  dropout_2 = 0.0729  learning_rate = 0.7775  Value = 0.6858 
## elapsed = 9.34   Round = 34  dropout_1 = 0.3303  dropout_2 = 0.1413  learning_rate = 0.8706  Value = 0.3142 
## elapsed = 9.35   Round = 35  dropout_1 = 0.1434  dropout_2 = 0.1947  learning_rate = 0.3141  Value = 0.6858 
## elapsed = 9.71   Round = 36  dropout_1 = 0.4143  dropout_2 = 0.2342  learning_rate = 0.9176  Value = 0.6858 
## elapsed = 10.17  Round = 37  dropout_1 = 0.4765  dropout_2 = 0.0702  learning_rate = 0.1650  Value = 0.3142 
## elapsed = 9.27   Round = 38  dropout_1 = 0.5692  dropout_2 = 0.5528  learning_rate = 0.7848  Value = 0.6858 
## elapsed = 9.30   Round = 39  dropout_1 = 0.4065  dropout_2 = 0.3554  learning_rate = 0.4334  Value = 0.6858 
## elapsed = 9.81   Round = 40  dropout_1 = 0.4110  dropout_2 = 0.6048  learning_rate = 0.9902  Value = 0.3142 
## elapsed = 9.36   Round = 41  dropout_1 = 0.2173  dropout_2 = 0.1987  learning_rate = 0.5948  Value = 0.6858 
## elapsed = 9.44   Round = 42  dropout_1 = 0.6010  dropout_2 = 0.0913  learning_rate = 0.0474  Value = 0.5201 
## elapsed = 9.79   Round = 43  dropout_1 = 0.6168  dropout_2 = 0.6500  learning_rate = 0.7766  Value = 0.3142 
## elapsed = 10.72  Round = 44  dropout_1 = 0.1778  dropout_2 = 0.4973  learning_rate = 0.8318  Value = 0.6858 
## elapsed = 9.47   Round = 45  dropout_1 = 0.0419  dropout_2 = 0.5112  learning_rate = 0.4307  Value = 0.6858 
## elapsed = 9.39   Round = 46  dropout_1 = 0.4410  dropout_2 = 0.4375  learning_rate = 0.7735  Value = 0.5000 
## elapsed = 9.58   Round = 47  dropout_1 = 0.0542  dropout_2 = 0.6527  learning_rate = 0.9407  Value = 0.3142 
## elapsed = 10.16  Round = 48  dropout_1 = 0.5404  dropout_2 = 0.0733  learning_rate = 0.3815  Value = 0.6858 
## elapsed = 9.56   Round = 49  dropout_1 = 0.2811  dropout_2 = 0.5413  learning_rate = 0.1508  Value = 0.3142 
## elapsed = 9.46   Round = 50  dropout_1 = 0.4779  dropout_2 = 0.0370  learning_rate = 0.6664  Value = 0.3142 
## 
##  Best Parameters Found: 
## Round = 2    dropout_1 = 0.5518  dropout_2 = 0.4850  learning_rate = 0.4145  Value = 0.6858
## 646 sec elapsed

The best hyper-parameters found so far by Bayesian Optimization achieve 68.58% accuracy on the validation set.

3.2.8 Compare with Particle Swarm Optimization

First, we need to readjust the fitness function to suit Particle Swarm Optimization.

Let’s run the algorithm and see if it can find a better solution than Bayesian Optimization. PSO will run for 100 iterations with 10 particles. If PSO does not improve within 10 iterations, the algorithm stops.

## 1357.17 sec elapsed
## $par
## [1] 0.4742994 0.4008434 0.1029247
## 
## $value
## [1] -0.6861631
## 
## $counts
##  function iteration  restarts 
##       110        11         0 
## 
## $convergence
## [1] 4
## 
## $message
## [1] "Maximal number of iterations without improvement reached"

PSO requires more runtime, since each function evaluation is heavy. Its optimum accuracy (68.616%) is slightly above that of Bayesian Optimization.

4 Conclusion

Bayesian Optimization is an optimization method that applies a probabilistic model to obtain the optimum value with a minimal number of function evaluations. It is best suited for problems with costly function evaluations, such as hyper-parameter tuning for a deep learning model. Compared to Particle Swarm Optimization, Bayesian Optimization performed worse on the portfolio optimization problem, with a longer runtime but a far lower number of function evaluations. Since function evaluation is not costly in that problem, PSO outperformed Bayesian Optimization. Meanwhile, for deep learning, Bayesian Optimization beat PSO's runtime and achieved a close optimum validation accuracy. The number of iterations may influence the result of Bayesian Optimization.

5 Reference